25 research outputs found

    Scalable High-Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning

    Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, the development of deformable image registration methods that scale well to new image modalities or new image applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features from the observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked auto-encoder to identify intrinsic deep feature representations in image patches. Since the stacked auto-encoder is trained in an unsupervised manner, no ground-truth label knowledge is required. This makes the proposed feature selection method more flexible to new imaging modalities, since feature representations can be learned directly from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared to two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed image registration framework, image registration experiments were also conducted on 7.0-tesla brain MR images. In all experiments, the new image registration framework consistently produced more accurate registration results than the state-of-the-art methods.
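    The unsupervised reconstruction objective at the heart of this abstract can be illustrated with a minimal sketch. This is not the authors' convolutional stacked auto-encoder: it is a single tied-weight auto-encoder layer trained by batch gradient descent on synthetic low-rank "patches", just to show how compact features are learned with no labels.

```python
import numpy as np

# Toy single-layer auto-encoder with tied weights (encoder W, decoder W.T),
# trained on synthetic flattened 7x7 patches. Purely illustrative.
rng = np.random.default_rng(0)

basis = rng.random((8, 49))              # hidden structure in the patches
codes = rng.random((200, 8))
patches = codes @ basis / 8.0            # 200 low-rank patches, values ~[0, 1]

n_hidden = 16                            # compact feature dimension
W = rng.normal(scale=0.1, size=(49, n_hidden))
b_h = np.zeros(n_hidden)
b_o = np.zeros(49)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.5
for _ in range(300):
    h = sigmoid(patches @ W + b_h)       # encode: patch -> feature
    out = sigmoid(h @ W.T + b_o)         # decode: feature -> reconstruction
    err = out - patches                  # unsupervised reconstruction error
    d_out = err * out * (1 - out)
    d_h = (d_out @ W) * h * (1 - h)
    grad_W = patches.T @ d_h + d_out.T @ h   # tied-weight gradient
    W -= lr * grad_W / len(patches)
    b_h -= lr * d_h.mean(axis=0)
    b_o -= lr * d_out.mean(axis=0)

features = sigmoid(patches @ W + b_h)    # learned feature per patch
mse = float(np.mean((sigmoid(features @ W.T + b_o) - patches) ** 2))
print(features.shape, round(mse, 4))
```

A real pipeline would stack several such layers and use convolutional weight sharing, then use `features` as the patch descriptor for correspondence detection.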

    Detecting Anatomical Landmarks for Fast Alzheimer’s Disease Diagnosis

    Structural magnetic resonance imaging (MRI) is a very popular and effective technique for diagnosing Alzheimer's disease (AD). The success of computer-aided diagnosis methods using structural MRI data is largely dependent on two time-consuming steps: 1) nonlinear registration across subjects, and 2) brain tissue segmentation. To overcome this limitation, we propose a landmark-based feature extraction method that requires neither nonlinear registration nor tissue segmentation. In the training stage, in order to distinguish AD subjects from healthy controls (HCs), group comparisons based on local morphological features are first performed to identify brain regions that show significant group differences. The centers of the identified regions then become landmark locations (AD landmarks, for short) capable of differentiating AD subjects from HCs. In the testing stage, using the learned AD landmarks, the corresponding landmarks are detected in a testing image with an efficient technique based on a shape-constrained regression-forest algorithm. To improve detection accuracy, an additional set of salient and consistent landmarks is identified to guide the AD landmark detection. Based on the detected AD landmarks, morphological features are extracted to train a support vector machine (SVM) classifier capable of predicting the AD condition. In the experiments, our method is evaluated on landmark detection and AD classification sequentially. Specifically, the landmark detection error (manually annotated versus automatically detected) of the proposed landmark detector is 2.41 mm, and our landmark-based AD classification accuracy is 83.7%. Lastly, the AD classification performance of our method is comparable to, or even better than, that achieved by existing region-based and voxel-based methods, while the proposed method is approximately 50 times faster.
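    The landmark-based idea can be sketched in a few lines: pool a small neighborhood of the image around each landmark into one morphological feature, then feed the feature vector to a linear classifier. Everything below is synthetic and hypothetical (the landmark positions, the mean-intensity "morphology", and a plain logistic regression standing in for the SVM); it only shows why skipping nonlinear registration and segmentation is possible once landmarks are known.

```python
import numpy as np

rng = np.random.default_rng(1)

def landmark_features(image, landmarks, radius=2):
    """Mean intensity in a small cube around each landmark (toy morphology)."""
    feats = []
    for x, y, z in landmarks:
        cube = image[x - radius:x + radius + 1,
                     y - radius:y + radius + 1,
                     z - radius:z + radius + 1]
        feats.append(cube.mean())
    return np.array(feats)

landmarks = [(8, 8, 8), (16, 16, 16), (24, 24, 24)]   # hypothetical AD landmarks

# Synthetic cohort: "AD" images carry extra signal at the landmarks.
def make_image(is_ad):
    img = rng.normal(size=(32, 32, 32))
    if is_ad:
        for x, y, z in landmarks:
            img[x - 2:x + 3, y - 2:y + 3, z - 2:z + 3] += 1.0
    return img

labels = np.array([i % 2 for i in range(40)])
X = np.array([landmark_features(make_image(bool(y)), landmarks) for y in labels])

# Logistic regression by gradient descent (stand-in for the SVM in the text).
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - labels) / len(X)
    b -= 0.1 * float(np.mean(p - labels))

acc = float(np.mean(((X @ w + b) > 0) == (labels == 1)))
print(acc)
```

The speed advantage in the abstract comes from exactly this structure: feature extraction touches only a few small cubes per image instead of warping and segmenting the whole volume.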

    Domain Transfer Learning for MCI Conversion Prediction

    Machine learning methods have been increasingly used to predict the conversion of mild cognitive impairment (MCI) to Alzheimer's disease (AD) by classifying MCI converters (MCI-C) versus MCI non-converters (MCI-NC). However, most existing methods construct classifiers using only data from one particular target domain (e.g., MCI) and ignore data in related domains (e.g., AD and normal control (NC)) that could provide valuable information to improve the performance of MCI conversion prediction. To this end, we develop a novel domain transfer learning method for MCI conversion prediction that can use data from both the target domain (i.e., MCI) and the auxiliary domains (i.e., AD and NC). Specifically, the proposed method consists of three key components: 1) a domain transfer feature selection (DTFS) component that selects the most informative feature subset from both the target and auxiliary domains across different imaging modalities, 2) a domain transfer sample selection (DTSS) component that selects the most informative sample subset from the same target and auxiliary domains across different data modalities, and 3) a domain transfer support vector machine (DTSVM) classification component that fuses the selected features and samples to separate MCI-C and MCI-NC patients. We evaluate our method on 202 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) with MRI, FDG-PET, and CSF data. The experimental results show that the proposed method can classify MCI-C patients from MCI-NC patients with an accuracy of 79.4%, with the aid of additional domain knowledge learned from AD and NC.
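    One simple way to convey the core intuition (auxiliary AD/NC data guiding a scarce MCI-C/MCI-NC problem) is instance weighting: pool both cohorts but give auxiliary samples a reduced weight. This is a rough analogy, not the DTFS/DTSS/DTSVM machinery of the paper; the data, weights, and weighted logistic regression below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

d = 10
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)   # shared disease-related axis

def cohort(n, shift, noise):
    """Two balanced classes separated by `shift` along the shared axis."""
    y = np.array([i % 2 for i in range(n)])
    X = rng.normal(scale=noise, size=(n, d)) + np.outer(2 * y - 1, shift * direction)
    return X, y

X_aux, y_aux = cohort(100, shift=2.5, noise=1.0)   # AD vs. NC: clear separation
X_tgt, y_tgt = cohort(20, shift=1.5, noise=1.0)    # MCI-C vs. MCI-NC: subtler, fewer

X = np.vstack([X_tgt, X_aux])
y = np.concatenate([y_tgt, y_aux])
# Target samples get full weight; auxiliary samples guide but do not dominate.
w_sample = np.concatenate([np.ones(len(y_tgt)), 0.3 * np.ones(len(y_aux))])

# Sample-weighted logistic regression by gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(800):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = w_sample * (p - y)
    w -= 0.2 * X.T @ g / w_sample.sum()
    b -= 0.2 * float(g.sum() / w_sample.sum())

acc_tgt = float(np.mean(((X_tgt @ w + b) > 0) == (y_tgt == 1)))
print(round(acc_tgt, 2))
```

The down-weighting factor (0.3 here) plays the role that the paper's transfer components play more carefully: it controls how strongly the AD/NC boundary informs the MCI conversion boundary.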

    Hierarchical multi-atlas label fusion with multi-scale feature representation and label-specific patch partition

    Multi-atlas patch-based label fusion methods have been successfully used to improve segmentation accuracy in many important medical image analysis applications. In general, to achieve label fusion, a target image is first registered to several atlas images; after registration, a label is assigned to each point in the target image by determining the similarity between the underlying target image patch (centered at the target point) and the aligned image patch in each atlas image. To achieve the highest level of accuracy during the label fusion process, it is critical that the chosen patch similarity measurement accurately capture the tissue/shape appearance of the anatomical structure. One major limitation of existing state-of-the-art label fusion methods is that they often apply a fixed-size image patch throughout the entire label fusion procedure. Doing so may severely degrade the fidelity of the patch similarity measurement, which in turn may fail to capture complex tissue appearance patterns expressed by the anatomical structure. To address this limitation, we advance the state of the art with three new label fusion contributions. First, each image patch is now characterized by a multi-scale feature representation that encodes both local and semi-local image information, increasing the accuracy of the patch-based similarity measurement. Second, to limit the possibility of the patch-based similarity measurement being wrongly guided by the presence of multiple anatomical structures in the same image patch, each atlas image patch is further partitioned into a set of label-specific partial image patches according to the existing labels. Since image information is now semantically divided into different patterns, these label-specific atlas patches make the label fusion process more specific and flexible. Lastly, in order to correct target points that are mislabeled during label fusion, a hierarchical, coarse-to-fine iterative approach is used that gradually reduces the patch size. To evaluate the accuracy of our label fusion approach, the proposed method was used to segment the hippocampus in the ADNI dataset and in 7.0-tesla MR images, sub-cortical regions in the LONI LPBA40 dataset, mid-brain regions in the SATA dataset from the MICCAI 2013 segmentation challenge, and a set of key internal gray matter structures in the IXI dataset. In all experiments, the segmentation results of the proposed hierarchical label fusion method with multi-scale feature representations and label-specific atlas patches are more accurate than those of several well-known state-of-the-art label fusion methods.
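    The basic voting step that all of these contributions refine can be sketched directly: each atlas patch votes for its label with a weight derived from its similarity to the target patch. This minimal sketch uses a single target point, a fixed patch size, and a Gaussian weight on sum-of-squared-differences; the multi-scale features, label-specific partitions, and coarse-to-fine hierarchy described above are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

def fuse_label(target_patch, atlas_patches, atlas_labels, sigma=0.2):
    """Gaussian-weighted label voting on per-pixel SSD patch similarity."""
    votes = {}
    for patch, label in zip(atlas_patches, atlas_labels):
        ssd = float(np.sum((target_patch - patch) ** 2))
        w = np.exp(-ssd / (2.0 * sigma ** 2 * patch.size))
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)

# Two tissue "appearances": label 1 patches are bright, label 0 patches dark.
bright = [0.9 + 0.05 * rng.normal(size=(5, 5)) for _ in range(4)]
dark = [0.1 + 0.05 * rng.normal(size=(5, 5)) for _ in range(4)]
atlas_patches = bright + dark
atlas_labels = [1] * 4 + [0] * 4

target = 0.85 + 0.05 * rng.normal(size=(5, 5))   # target point looks bright
fused = fuse_label(target, atlas_patches, atlas_labels)
print(fused)
```

The fixed 5x5 patch here is exactly the limitation the abstract targets: in the full method the patch descriptor is multi-scale, each atlas patch is split per label, and the fusion is repeated coarse-to-fine with shrinking patch sizes.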

    Evaluation of machine learning algorithms for treatment outcome prediction in patients with epilepsy based on structural connectome data

    The objective of this study is to evaluate machine learning algorithms aimed at predicting surgical treatment outcomes in groups of patients with temporal lobe epilepsy (TLE) using only the structural brain connectome. Specifically, the brain connectome is reconstructed using white matter fiber tracts from presurgical diffusion tensor imaging. To achieve our objective, a two-stage connectome-based prediction framework is developed that gradually selects a small number of abnormal network connections contributing to the surgical treatment outcome, and in each stage a linear kernel operation is used to further improve the accuracy of the learned classifier. Using a 10-fold cross-validation strategy, the first stage of the connectome-based framework separates patients with TLE from normal controls with 80% accuracy, and the second stage correctly predicts the surgical treatment outcome of patients with TLE with 70% accuracy. Compared to existing state-of-the-art methods that use voxel-based morphometry (VBM) data, the proposed two-stage connectome-based prediction framework is a suitable alternative with comparable prediction performance. Our results additionally show that machine learning algorithms that exclusively use structural connectome data can predict treatment outcomes in epilepsy with accuracy similar to that of "expert-based" clinical decisions. In summary, using the unprecedented information provided by the brain connectome, machine learning algorithms may uncover pathological changes in brain network organization and improve outcome forecasting in the context of epilepsy.
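    A single stage of this kind of pipeline (select discriminative connections on the training folds only, classify, score with k-fold cross-validation) can be sketched as follows. The data are synthetic "connectomes", the feature count is arbitrary, and a nearest-centroid rule stands in for the linear-kernel classifier in the study.

```python
import numpy as np

rng = np.random.default_rng(4)

def kfold_accuracy(X, y, k=10):
    """k-fold CV: per fold, pick top connections by group difference on the
    training split, then classify held-out subjects by nearest centroid."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    correct = 0
    for f in folds:
        train = np.setdiff1d(idx, f)
        # Connection selection uses training subjects only (no test leakage).
        diff = np.abs(X[train][y[train] == 1].mean(0)
                      - X[train][y[train] == 0].mean(0))
        sel = np.argsort(diff)[-20:]
        c1 = X[train][y[train] == 1][:, sel].mean(0)
        c0 = X[train][y[train] == 0][:, sel].mean(0)
        for i in f:
            x = X[i, sel]
            pred = 1 if np.sum((x - c1) ** 2) < np.sum((x - c0) ** 2) else 0
            correct += pred == y[i]
    return correct / len(y)

# Synthetic cohort: 60 subjects x 200 connections; 15 connections are
# abnormal (shifted) in the patient group.
y = np.array([i % 2 for i in range(60)])
X = rng.normal(size=(60, 200))
X[y == 1, :15] += 1.5

acc = kfold_accuracy(X, y)
print(round(acc, 2))
```

Stacking two such stages, first patients versus controls, then good versus poor surgical outcome within patients, mirrors the framework's structure.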

    Hierarchical and symmetric infant image registration by robust longitudinal-example-guided correspondence detection

    To investigate anatomical differences across individual subjects, or longitudinal changes in early brain development, it is important to perform accurate image registration. However, due to fast brain development and dynamic tissue appearance changes, it is very difficult to align infant brain images acquired from birth to 1 year of age.

    Early brain development in infants at high risk for autism spectrum disorder

    Brain enlargement has been observed in children with Autism Spectrum Disorder (ASD), but the timing of this phenomenon and its relationship to the appearance of behavioral symptoms are unknown. Retrospective head circumference and longitudinal brain volume studies of 2-year-olds followed up at age 4 years have provided evidence that increased brain volume may emerge early in development [1, 2]. Studies of infants at high familial risk for autism can provide insight into the early development of autism, and have found that characteristic social deficits in ASD emerge during the latter part of the first year and in the second year of life [3, 4]. These observations suggest that prospective brain imaging studies of infants at high familial risk for ASD might identify early post-natal changes in brain volume occurring before the emergence of an ASD diagnosis. In this prospective neuroimaging study of 106 infants at high familial risk of ASD and 42 low-risk infants, we show that cortical surface area hyper-expansion between 6 and 12 months of age precedes the brain volume overgrowth observed between 12 and 24 months in the 15 high-risk infants diagnosed with autism at 24 months. Brain volume overgrowth was linked to the emergence and severity of autistic social deficits. A deep learning algorithm primarily using surface area information from brain MRI at 6 and 12 months of age predicted the diagnosis of autism in individual high-risk children at 24 months (positive predictive value of 81%, sensitivity of 88%). These findings demonstrate that early brain changes unfold during the period in which autistic behaviors are first emerging.

    The ENIGMA-Epilepsy working group: Mapping disease from large data sets

    Epilepsy is a common and serious neurological disorder comprising many different constituent conditions characterized by their electroclinical, imaging, and genetic features. MRI has been fundamental in advancing our understanding of brain processes in the epilepsies. Smaller-scale studies have identified many interesting imaging phenomena, with implications both for understanding pathophysiology and for improving clinical care. Through the infrastructure and concepts now well established by the ENIGMA Consortium, ENIGMA-Epilepsy was established to strengthen epilepsy neuroscience by greatly increasing sample sizes, leveraging ideas and methods established in other ENIGMA projects, and generating a body of collaborating scientists and clinicians to drive forward robust research. Here we review published, current, and future projects that include structural MRI, diffusion tensor imaging (DTI), and resting-state functional MRI (rsfMRI), and that employ advanced methods including structural covariance and event-based modeling analysis. We explore age-of-onset- and duration-related features, as well as phenomena-specific work focusing on particular epilepsy syndromes or phenotypes, multimodal analyses focused on understanding the biology of disease progression, and deep learning approaches. We encourage groups who may be interested in participating to make contact to further grow and develop ENIGMA-Epilepsy.